Introduction to Open Data Science - Course Project

About the project

Write a short description about the course and add a link to your GitHub repository here. This is an R Markdown (.Rmd) file so you should use R Markdown syntax.

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Mon Nov 23 10:09:29 2020"

The text continues here.

Learning R is an interesting process, even though there are many difficulties that sometimes cannot be solved. I hope to learn more and more during this course. I found this course in WebOodi.


Regression and model validation

This week I learned how to use R to wrangle data, fit a linear regression model, and evaluate the validity of the model.

Here we go again…

In Exercise 2, the code is as follows:

This data file includes 166 rows (observations) and 7 columns (variables), which means there are 7 measures for each of the 166 students. The variables are gender, age, attitude, deep, stra, surf, and points.

students2014 <- read.table('http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/learning2014.txt', sep = ',', header = TRUE)
dim(students2014)
## [1] 166   7
str(students2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...
library(ggplot2)
## Warning: package 'ggplot2' was built under R version 4.0.2
p1 <- ggplot(students2014, aes(x=attitude, y=points, col=gender))
p2 <- p1 + geom_point()
p2

p3 <- p2+geom_smooth(method = 'lm')
p4 <- p3+ggtitle('attitude of students versus exam points')
p4
## `geom_smooth()` using formula 'y ~ x'

pairs(students2014[-1])

library(GGally)
## Warning: package 'GGally' was built under R version 4.0.3
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)

p <- ggpairs(students2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap('facethist', bins = 20)))
p

There is a highly significant positive relationship between students' attitude and exam points. For males, age is negatively related to points.

my_model1 <- lm (points ~ attitude + stra + surf, data = students2014)
summary(my_model1)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = students2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08
my_model2 <- lm (points ~ attitude + stra, data = students2014)
summary(my_model2)
## 
## Call:
## lm(formula = points ~ attitude + stra, data = students2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

In the first regression model, points is the target variable and attitude, stra and surf are the explanatory variables. The model as a whole is usable because its p-value is below 0.001 (highly significant), even though stra and surf have no significant relationship with points on their own.

Model 1 explains 20.74% of the variation in points, while the simpler Model 2 still explains 20.48% (multiple R-squared = 0.2048).
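These percentages can also be extracted directly from the fitted model objects instead of being read off the printed summaries; a minimal sketch:

# multiple and adjusted R-squared straight from the model objects
summary(my_model1)$r.squared       # ~0.207 for the three-predictor model
summary(my_model2)$r.squared       # ~0.205 for the two-predictor model
summary(my_model2)$adj.r.squared   # adjusted R-squared of model 2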

par(mfrow=c(2,2))
plot(my_model2, which=c(1,2,5))

According to the roughly linear normal Q-Q plot, the residuals are approximately normally distributed. Based on these three diagnostic plots, the model assumptions appear reasonable.


Logistic regression and cross-validation

alc <- read.table('http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/alc.txt', sep=',', header = TRUE)

In this data set, there are 35 variables and 382 observations in total. The variables are: school, sex, age, address, famsize, Pstatus, Medu, Fedu, Mjob, Fjob, reason, nursery, internet, guardian, traveltime, studytime, failures, schoolsup, famsup, paid, activities, higher, romantic, famrel, freetime, goout, Dalc, Walc, health, absences, G1, G2, G3, alc_use, high_use.
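These figures can be checked directly from the data as read in above:

dim(alc)        # should give 382 rows and 35 columns
colnames(alc)   # lists the 35 variable names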

I chose failures, absences, traveltime and activities as explanatory variables to explore their relationships with the target variable high_use. The results are below (three of them have a significantly positive relationship with high_use); on the log-odds scale the fitted single-predictor models are:

logit(high_use) = 0.4176 * failures - 1.0157
logit(high_use) = 0.0683 * absences - 1.2640
logit(high_use) = 0.4290 * traveltime - 1.5146

So the probability of high_use increases as failures, absences and traveltime increase. However, there is no significant relationship between activities and high_use.
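The single-predictor coefficients quoted above can be reproduced with separate glm() fits; a minimal sketch (the model names are mine, for illustration only):

# one logistic regression per candidate predictor
m_failures <- glm(high_use ~ failures,   data = alc, family = "binomial")
m_absences <- glm(high_use ~ absences,   data = alc, family = "binomial")
m_travel   <- glm(high_use ~ traveltime, data = alc, family = "binomial")
m_activ    <- glm(high_use ~ activities, data = alc, family = "binomial")
coef(m_failures); coef(m_absences); coef(m_travel); coef(m_activ)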

library(ggplot2)
library(tidyr)
## Warning: package 'tidyr' was built under R version 4.0.2
library(dplyr)
## Warning: package 'dplyr' was built under R version 4.0.2
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
gather(alc) %>% glimpse
## Rows: 13,370
## Columns: 2
## $ key   <chr> "school", "school", "school", "school", "school", "school", "...
## $ value <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "...
gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

g1 <- ggplot(alc, aes(x = high_use, y = failures, col = sex))
g1 + geom_boxplot() + ylab("failures")

g2 <- ggplot(alc, aes(x = high_use, y = absences, col = sex))
g2 + geom_boxplot() + ggtitle("Student absences by alcohol consumption and sex")

Looking at the combined bar chart, the number of students who attend extra-curricular activities and the number who do not are almost equal, which makes the non-significant relationship between activities and high_use plausible. The same reasoning applies to the other relationships, and the results are consistent with the original hypotheses.

Based on the logistic regression of high_use on failures, absences, traveltime and activities, the fitted model is as below:

| term | coefficient |
|---|---|
| (Intercept) | -1.86181552 |
| failures | 0.36618544 |
| absences | 0.07004812 |
| traveltime | 0.42610620 |
| activitiesyes | -0.32180313 |

| term | OR | 2.5 % | 97.5 % |
|---|---|---|---|
| (Intercept) | 0.1553903 | 0.0828279 | 0.2841262 |
| failures | 1.4422227 | 1.0755782 | 1.9361180 |
| absences | 1.0725598 | 1.0374448 | 1.1130908 |
| traveltime | 1.5312834 | 1.1164457 | 2.1057338 |
| activitiesyes | 0.7248409 | 0.4542950 | 1.1528149 |

logit(high_use) = 0.36619 * failures + 0.07005 * absences + 0.42611 * traveltime - 1.86182

As in the earlier hypothesis, there is no significant relationship between activities and high_use, so activities should be removed from the model.
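For reference, the coefficient and odds-ratio tables above can be produced along these lines (a sketch; the name m4 is only for illustration):

# logistic regression with all four candidate predictors
m4 <- glm(high_use ~ failures + absences + traveltime + activities,
          data = alc, family = "binomial")
coef(m4)                # log-odds coefficients, as in the first table
OR <- exp(coef(m4))     # odds ratios
CI <- exp(confint(m4))  # 95% confidence intervals on the odds-ratio scale
cbind(OR, CI)
# turning the log-odds into a probability, e.g. 1 failure, 5 absences,
# traveltime 2 and no extra-curricular activities:
plogis(-1.86182 + 0.36619 * 1 + 0.07005 * 5 + 0.42611 * 2)  # roughly 0.43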

The 2x2 cross tabulation of predictions is as below:

| high_use | prediction: FALSE | prediction: TRUE |
|---|---|---|
| FALSE | 261 | 9 |
| TRUE | 87 | 25 |

m5 <- glm(high_use ~ failures+absences+traveltime, data = alc, family = "binomial")
probabilities <- predict(m5, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)
select(alc, failures, absences, traveltime, high_use, probability, prediction) %>% tail(10)
##     failures absences traveltime high_use probability prediction
## 373        1        0          2    FALSE   0.3105560      FALSE
## 374        1       14          2     TRUE   0.5424959       TRUE
## 375        0        2          2    FALSE   0.2616419      FALSE
## 376        0        7          3    FALSE   0.4322838      FALSE
## 377        1        0          1    FALSE   0.2285089      FALSE
## 378        0        0          1    FALSE   0.1686879      FALSE
## 379        1        0          2    FALSE   0.3105560      FALSE
## 380        1        0          2    FALSE   0.3105560      FALSE
## 381        0        3          2     TRUE   0.2752164      FALSE
## 382        0        0          3     TRUE   0.3194072      FALSE
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   260   10
##    TRUE     88   24
g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g + geom_point()

table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table %>% addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.68062827 0.02617801 0.70680628
##    TRUE  0.23036649 0.06282723 0.29319372
##    Sum   0.91099476 0.08900524 1.00000000

According to 10-fold cross-validation of my model, I got a smaller error (0.2591623) than the DataCamp example, which means that my model has better test-set performance.
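For reference, this error can be computed with cv.glm() from the boot package; a minimal sketch (loss_func is my name for the 0/1 misclassification cost, and the exact value varies slightly between runs because the folds are assigned at random):

# 10-fold cross-validation of m5 with a misclassification cost function
library(boot)
loss_func <- function(class, prob) mean(abs(class - prob) > 0.5)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m5, K = 10)
cv$delta[1]   # average prediction error over the 10 folds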

I fitted two further models with more predictors: m6 has 5 predictors (failures, absences, traveltime, higher and sex), while m7 has 4 explanatory variables.

The 10-fold cross-validation results are as below:

m6: 0.2408377
m7: 0.2382199

Compared with m5 (0.2591623), m7 has the smallest error while having only one more predictor than m5. The fitted model is:

| term | coefficient |
|---|---|
| (Intercept) | -2.51975566 |
| failures | 0.36718070 |
| absences | 0.07462037 |
| traveltime | 0.39558320 |
| sexM | 0.96465837 |

logit(high_use) = 0.36718070 * failures + 0.07462037 * absences + 0.39558320 * traveltime + 0.96465837 * sexM - 2.51975566
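A sketch of how m6 and m7 can be fitted and compared, reusing loss_func from the sketch above (the predictor sets follow the description, and the cross-validation errors vary slightly between runs):

# the two larger candidate models
m6 <- glm(high_use ~ failures + absences + traveltime + higher + sex,
          data = alc, family = "binomial")
m7 <- glm(high_use ~ failures + absences + traveltime + sex,
          data = alc, family = "binomial")
# 10-fold cross-validation errors for each model
cv.glm(data = alc, cost = loss_func, glmfit = m6, K = 10)$delta[1]
cv.glm(data = alc, cost = loss_func, glmfit = m7, K = 10)$delta[1]
coef(m7)   # matches the fitted equation above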


Clustering and classification

The Boston data frame has 506 rows and 14 columns, i.e. 14 variables:

- crim: per capita crime rate by town
- zn: proportion of residential land zoned for lots over 25,000 sq.ft.
- indus: proportion of non-retail business acres per town
- chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- nox: nitrogen oxides concentration (parts per 10 million)
- rm: average number of rooms per dwelling
- age: proportion of owner-occupied units built prior to 1940
- dis: weighted mean of distances to five Boston employment centres
- rad: index of accessibility to radial highways
- tax: full-value property-tax rate per $10,000
- ptratio: pupil-teacher ratio by town
- black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
- lstat: lower status of the population (percent)
- medv: median value of owner-occupied homes in $1000s

According to the correlation matrix, there are clear positive relationships between rad and tax and between indus and nox, while dis is clearly negatively related to nox and indus. Also, chas shows no obvious correlation with any of the other variables.
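A minimal sketch of the correlation matrix these observations are based on (the name cor_matrix is mine; the corrplot call is optional and only works if that package is installed):

# correlation matrix of the Boston variables, rounded for readability
library(MASS)
data("Boston")
cor_matrix <- round(cor(Boston), digits = 2)
cor_matrix
# corrplot::corrplot(cor_matrix, type = "upper")   # optional visualisation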

# access the MASS package
library(MASS)
## Warning: package 'MASS' was built under R version 4.0.3
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

After scaling, all variables are on a common scale: each has mean zero and unit standard deviation, so the values are much smaller than the original ones.
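A quick numerical check of this claim (the means are zero only up to floating-point error):

round(colMeans(scale(Boston)), 10)   # all (essentially) zero
apply(scale(Boston), 2, sd)          # all exactly one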

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2500000 0.2450495 0.2475248 0.2574257 
## 
## Group means:
##                   zn      indus        chas        nox          rm        age
## low       0.90265828 -0.9666313 -0.07742312 -0.8722582  0.43292068 -0.8954392
## med_low  -0.06539846 -0.3197867 -0.03371693 -0.5860247 -0.16464855 -0.3090766
## med_high -0.38347752  0.1965965  0.20012296  0.3838094  0.07993772  0.4279782
## high     -0.48724019  1.0149946 -0.08304540  1.0645106 -0.39752441  0.8167465
##                 dis        rad        tax     ptratio       black       lstat
## low       0.9508873 -0.7010085 -0.7335666 -0.45188475  0.36630332 -0.76521528
## med_low   0.4282093 -0.5445257 -0.4700867 -0.02378498  0.31176268 -0.10969886
## med_high -0.4077193 -0.4352009 -0.3191413 -0.30510403  0.06007642  0.04763112
## high     -0.8613193  1.6596029  1.5294129  0.80577843 -0.84622875  0.90620472
##                 medv
## low       0.52479558
## med_low  -0.03530737
## med_high  0.13288897
## high     -0.69692909
## 
## Coefficients of linear discriminants:
##                  LD1         LD2         LD3
## zn       0.184478116  0.49641843 -0.99256208
## indus    0.003893237 -0.35071073  0.46663638
## chas    -0.104095802 -0.01378560  0.09256502
## nox      0.248475006 -0.83668554 -1.39882157
## rm      -0.150942077 -0.12351181 -0.22735696
## age      0.212316274 -0.36725249  0.10528519
## dis     -0.229357720 -0.12558982  0.38227755
## rad      3.724733585  0.82076081  0.04787377
## tax      0.025834834  0.24831857  0.36339482
## ptratio  0.146741292  0.03487187 -0.26542221
## black   -0.145511066  0.01113759  0.11684049
## lstat    0.133175486 -0.13366613  0.28675253
## medv     0.146798760 -0.26042455 -0.16308941
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9602 0.0300 0.0099
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       17       7        2    0
##   med_low    7      11        9    0
##   med_high   0      10       14    2
##   high       0       0        1   22
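The overall share of correct predictions can be computed directly from the predicted and true classes (the exact figure depends on the random train/test split):

# proportion of test observations classified correctly
mean(lda.pred$class == correct_classes)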
r = getOption("repos")
r["CRAN"] = "http://cran.us.r-project.org"
options(repos = r)

# access the MASS package
library(MASS)

data('Boston')

# center and standardize variables
boston_scaled <- scale(Boston)

# euclidean distance matrix
dist_eu <- dist(boston_scaled)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_scaled, method = 'manhattan')

# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618
# k-means clustering
km <-kmeans(boston_scaled, centers = 3)

# plot the Boston dataset with clusters
pairs(boston_scaled, col = km$cluster)

## Bonus

# k-means clustering
km <-kmeans(boston_scaled, centers = 3)

# plot the Boston dataset with clusters
pairs(boston_scaled, col = km$cluster)

## Superbonus

model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

install.packages("plotly")
## Installing package into 'C:/rlibs/4.0.1'
## (as 'lib' is unspecified)
## package 'plotly' successfully unpacked and MD5 sums checked
## 
## The downloaded binary packages are in
##  C:\Users\siqizhao\AppData\Local\Temp\RtmpCy92ha\downloaded_packages
plotly::plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z =  matrix_product$LD3, type= 'scatter3d', mode='markers', color=train$crime)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.